This paper proposes a novel Bayesian machine learning algorithm for drawing interpretable inference on heterogeneous causal effects in the presence of imperfect compliance (e.g., under an irregular assignment mechanism). Through Monte Carlo simulations, we show that the proposed Bayesian Causal Forest with Instrumental Variable (BCF-IV) method outperforms other machine learning techniques tailored for discovering and estimating heterogeneous causal effects, while controlling for the familywise error rate (or, less stringently, the false discovery rate) at the leaf level. BCF-IV sheds light on the heterogeneity of causal effects in instrumental variable scenarios and, in turn, provides policy-makers with a relevant tool for designing targeted policies. In an empirical application, we evaluate the effect of additional funding on students' performance; the results indicate that BCF-IV could be used to enhance the effectiveness of school funding.
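BCF-IV targets complier average causal effects within the subgroups discovered by the forest. The method itself is Bayesian, but as a minimal, purely illustrative sketch of the within-leaf estimand (the function name and interface below are hypothetical, not from the paper), the classical Wald estimator for one subgroup could look like this:

    import numpy as np

    def conditional_late(y, w, z, mask):
        # Restrict to one subgroup, e.g. the observations falling into one leaf.
        y, w, z = y[mask], w[mask], z[mask]
        itt_y = y[z == 1].mean() - y[z == 0].mean()  # intention-to-treat effect on the outcome
        itt_w = w[z == 1].mean() - w[z == 0].mean()  # first stage: instrument's effect on take-up
        return itt_y / itt_w                         # Wald (LATE) estimate for the subgroup

Here y is the outcome, w the received treatment, and z the assigned instrument; the ratio recovers the local average treatment effect among compliers in that leaf under the standard IV assumptions.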
Detecting persons in images or video with neural networks is a well-studied subject in the literature. However, such works usually assume the availability of a camera of decent resolution and a high-performance processor or GPU to run the detection algorithm, which significantly increases the cost of a complete detection system. Many applications, by contrast, require low-cost solutions composed of cheap sensors and simple microcontrollers. In this paper, we demonstrate that even on such hardware we are not condemned to simple, classic image processing techniques. We propose a novel ultra-lightweight CNN-based person detector that processes thermal video from a low-cost 32x24 pixel static imager. Trained and compressed on our own recorded dataset, our model achieves up to 91.62% accuracy (F1-score), has fewer than 10k parameters, and runs in as little as 87 ms and 46 ms on the low-cost STM32F407 and STM32F746 microcontrollers, respectively.
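The abstract does not spell out the architecture, so the following is only a hedged sketch of what a sub-10k-parameter CNN over 32x24 thermal frames could look like (layer sizes are assumptions, not the authors' model):

    import torch
    import torch.nn as nn

    class TinyThermalNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),                      # 24x32 -> 12x16
                nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),                      # 12x16 -> 6x8
                nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),              # global average pooling
            )
            self.classifier = nn.Linear(16, 2)        # person / no person

        def forward(self, x):
            return self.classifier(self.features(x).flatten(1))

    model = TinyThermalNet()
    print(sum(p.numel() for p in model.parameters()))  # ~3.6k parameters, within a 10k budget

Keeping channel counts small and replacing dense layers with global average pooling is the typical way such models stay within the memory and latency limits of microcontrollers like the STM32 family.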
SchNetPack is a versatile neural network toolbox that addresses the requirements of both method development and application in atomistic machine learning. Version 2.0 comes with an improved data pipeline, modules for equivariant neural networks, and a PyTorch implementation of molecular dynamics. An optional integration with PyTorch Lightning and the Hydra configuration framework powers a flexible command-line interface. This makes SchNetPack 2.0 easily extendable with custom code and ready for complex training tasks, such as the generation of 3D molecular structures.
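As a hedged illustration of the Hydra-powered command-line interface (the entry point and experiment name below follow the project's public documentation and may differ between versions), training a model reduces to selecting a configuration:

    spktrain experiment=qm9_atomwise

Hydra-style key=value overrides replace edits to hand-written training scripts, which is what makes the interface extendable with custom configuration groups and code.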
This paper introduces a dataset for training and evaluating methods for 6D pose estimation of hand-held tools in task demonstrations captured by a standard RGB camera. Despite significant progress in 6D pose estimation, the performance of existing methods is usually limited for heavily occluded objects, a common situation in imitation learning, where the manipulating hand typically partially occludes the object. Currently, there is a lack of datasets that would enable the development of 6D pose estimation methods robust to these conditions. To overcome this problem, we collected a new dataset (Imitrob) aimed at 6D pose estimation in imitation learning and other applications where a human holds a tool and performs a task. The dataset contains image sequences of three different tools and six manipulation tasks, with two camera viewpoints, four human subjects, and left/right hand. Each image is accompanied by an accurate ground-truth measurement of the 6D object pose obtained with an HTC Vive motion tracking device. The use of the dataset is demonstrated by training and evaluating a state-of-the-art 6D object pose estimation method (DOPE) in various setups. The dataset and code are publicly available at http://imitrob.ciirc.cvut.cz/imitrobdataset.php.
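Evaluating a detector such as DOPE against this kind of ground truth is commonly done with the average distance (ADD) metric; the following is a minimal sketch (the function is illustrative, not part of the released Imitrob code):

    import numpy as np

    def add_metric(R_gt, t_gt, R_pred, t_pred, model_points):
        # Transform the object's 3D model points by both poses and average the distances.
        gt = model_points @ R_gt.T + t_gt        # (N, 3) points under the ground-truth pose
        pred = model_points @ R_pred.T + t_pred  # (N, 3) points under the predicted pose
        return np.linalg.norm(gt - pred, axis=1).mean()

A predicted pose is then typically counted as correct when its ADD falls below a threshold such as 10% of the object diameter.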
Industry 4.0 envisions Cyber-Physical Production Systems (CPPSs) to foster adaptive production of mass-customizable products. Manufacturing approaches based on capabilities and skills aim to support this adaptability by encapsulating machine functions and decoupling them from specific production processes. At the 2022 IEEE Conference on Emerging Technologies and Factory Automation (ETFA), a special session on capability- and skill-based manufacturing is hosted for the fourth time. However, an overview of capability- and skill-based systems in factory automation and manufacturing systems is still missing. This paper aims to provide such an overview and give insights into this particular field of research. We conducted a concise literature survey of papers covering the topics of capabilities and skills in manufacturing from the last ten years of the ETFA conference. We found 247 papers with a notion of capabilities and skills, and identified and analyzed 34 relevant papers that met this survey's inclusion criteria. In this paper, we provide (i) an overview of the research field, (ii) an analysis of the characteristics of capabilities and skills, and (iii) a discussion of gaps and opportunities.